To address the problems of abundant specialized vocabulary, sparse features, and a high degree of label confusion in pre-hospital emergency text, a text classification model based on the Label Confusion Model (LCM) was proposed. Firstly, Bidirectional Encoder Representations from Transformers (BERT) was used to obtain dynamic word vectors and fully exploit the semantic information of specialized vocabulary. Then, a text representation vector was generated by fusing a Bidirectional Long Short-Term Memory (BiLSTM) network, weighted convolution, and an attention mechanism, improving the feature extraction capability of the model. Finally, LCM was used to capture the semantic associations between texts and labels, as well as the dependencies among labels, thereby alleviating the high degree of label confusion. In experiments conducted on a pre-hospital emergency text dataset and a public news text dataset, the F1 scores of the LCM-based text classification model reached 93.46% and 97.08%, respectively, which were 0.95% to 7.01% and 0.38% to 2.00% higher than those of models such as the Text Convolutional Neural Network (TextCNN), BiLSTM, and BiLSTM-Attention. Experimental results show that the proposed model can capture the semantic information of specialized vocabulary, extract text features more accurately, and effectively alleviate label confusion, while also exhibiting a certain generalization ability.
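The LCM step described above can be illustrated with a minimal numeric sketch. The idea, following the general Label Confusion Model formulation, is to compare the text representation vector with learnable label embeddings, turn the similarities into a "label confusion" distribution, and mix that distribution with the one-hot target to obtain a softened training target. All function names, shapes, and the hyper-parameter `alpha` below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def simulated_label_distribution(text_vec, label_embs, true_label, alpha=4.0):
    """Minimal sketch of the LCM target-softening step (assumed formulation).

    text_vec:   (d,) text representation, e.g. from BERT + BiLSTM + attention
    label_embs: (num_labels, d) learnable label embedding matrix
    true_label: index of the gold label
    alpha:      assumed hyper-parameter weighting the one-hot target
    """
    # Semantic similarity between the text and every label
    sims = label_embs @ text_vec                 # (num_labels,)
    confusion = softmax(sims)                    # label confusion distribution
    one_hot = np.zeros(len(label_embs))
    one_hot[true_label] = 1.0
    # Mix the one-hot target with the confusion distribution and renormalize;
    # the model is then trained against this softened distribution
    # (typically with a KL-divergence loss) instead of the raw one-hot target.
    return softmax(alpha * one_hot + confusion)
```

In this sketch, labels that are semantically close to the text receive a small share of the probability mass, so the training target encodes inter-label similarity rather than treating all wrong labels as equally wrong, which is how LCM mitigates label confusion.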